This week I continued work on implementing SqueezeNet/SqueezeDet with our data. I returned to our PyTorch repository for the implementation and got "tantalizingly close" to success.
This day I worked on getting last week's SqueezeDet repo to run on our dataset. It turned out the computer had run out of storage partway through (after about a day of running) and never finished. I cleared out space and restarted the run, which takes about two hours, timed so it would be finished by the time I was ready to check its results. It is decently accurate, which is promising for our work.
Afterward, I began modifying a copy of the repo. I moved files from our repo into my copy of theirs and edited files already in theirs to make the changeover from KITTI. Because the change is so radical, I had a lot of difficulty even getting it to start running; the code would start, then catch an error and crash. I will address this error the next day.
This day I still had no idea how to overcome the previous day's error, so I went back to our original PyTorch repository and renewed my attempt to implement SqueezeNet in it. I made decent progress until I hit an error caused by the input being 2D where it should be 4D; I do not know where the input comes from or why it is two dimensions short of what is expected. The code for the other nets (AlexNet, VGG16) works just fine, and those are the ones I copied from to make my SqueezeNet version (in addition to the squeezenet.py file provided by the faster_RCNN PyTorch repo itself), so I know it can work; I just have to work out how.
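To illustrate the kind of mismatch involved, here is a minimal PyTorch sketch (the tensor sizes are hypothetical, not our actual data): a Conv2d layer expects a 4D N×C×H×W batch, so a flattened 2D batch must be reshaped before being fed in.

```python
import torch
import torch.nn as nn

# Hypothetical sizes: a batch of 8 flattened 3x32x32 images (2-D: N x features).
x2d = torch.randn(8, 3 * 32 * 32)

conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)  # expects 4-D: N x C x H x W

try:
    conv(x2d)  # feeding the 2-D tensor fails with a shape error
except RuntimeError as e:
    print("2-D input rejected:", e)

x4d = x2d.view(8, 3, 32, 32)  # restore the N x C x H x W layout
out = conv(x4d)
print(out.shape)  # torch.Size([8, 16, 32, 32])
```

Tracing where in the pipeline the tensor gets flattened like this (e.g. with shape prints or a forward hook) is one way to pin down the source of such an error.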
This day I did no work because it was a holiday.
This day I continued working on the PyTorch implementation of SqueezeNet. I solved the 4D/2D issue and even got trainval_net to run on SqueezeNet. However, this revealed a new issue: the loss was not calculated correctly. It first output a loss in the thousands or higher (when it should be at most ~20), then reported 'nan' thereafter. The trained network file it output did not run.
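For an exploding-then-nan loss, a common first line of defense (a sketch assuming a standard training loop; the linear model here is a hypothetical stand-in, not our actual net) is to check that the loss is finite each step and clip gradients before the optimizer update:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in model and data; not our actual SqueezeNet setup.
torch.manual_seed(0)
model = nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
x, y = torch.randn(64, 4), torch.randn(64, 1)

for step in range(20):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    if not torch.isfinite(loss):
        # Bail out before a nan/inf loss corrupts the weights.
        raise RuntimeError(f"non-finite loss at step {step}")
    loss.backward()
    # Clip the gradient norm to tame exploding gradients (max_norm chosen arbitrarily).
    nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)
    opt.step()

print(float(loss))  # final loss stays finite
```

A loss in the thousands followed by nan often points to a learning rate that is too high or to labels/anchors on the wrong scale, so those are worth checking alongside clipping.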
This day I attended the weekly lab-wide meeting, where the director calls together my coworker, two student workers on summer leave, and me to discuss our respective progress over the previous week and troubleshoot issues. I talked about getting "tantalizingly close" to making SqueezeNet work with our PyTorch implementation. While our director was at a conference the previous week, our lab was invited to submit a paper and give a talk at a conference in November (paper due early August), so we will have much to prepare to meet that deadline.
Afterward, I read papers presented at the conference my director had attended the previous week; the papers were on neural network security, the topic of interest for the conference we were invited to.